
Overview

SuperDoc API implements rate limiting to ensure fair usage and maintain service quality for all users. Rate limits are applied per API key and are based on your subscription plan.

Rate Limit Tiers

Free Tier

  • 100 requests/hour
  • 1,000 requests/day
  • 5MB max file size
  • Best for testing and small projects

Pro Tier

  • 1,000 requests/hour
  • 10,000 requests/day
  • 25MB max file size
  • Priority support

Enterprise

  • Custom limits
  • Unlimited requests*
  • 100MB+ file sizes
  • Dedicated infrastructure

*Subject to fair usage policy
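For client-side budgeting, the published limits can be mirrored in a small lookup table. This is a sketch; the object shape and the `fitsPlan` helper are ours, not part of the API:

```javascript
// Published rate limits by plan, mirrored client-side for budgeting.
// (Illustrative structure; the API does not expose this object.)
const TIER_LIMITS = {
  free: { requestsPerHour: 100, requestsPerDay: 1000, maxFileMB: 5 },
  pro: { requestsPerHour: 1000, requestsPerDay: 10000, maxFileMB: 25 },
  enterprise: { requestsPerHour: Infinity, requestsPerDay: Infinity, maxFileMB: 100 },
};

// Reject oversized uploads locally before spending a request.
function fitsPlan(tier, fileSizeBytes) {
  const limits = TIER_LIMITS[tier];
  return fileSizeBytes <= limits.maxFileMB * 1024 * 1024;
}
```

Checking file size locally avoids burning a request (and a slice of your hourly budget) on an upload the API would reject anyway.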

Rate Limit Headers

Every API response includes rate limit information in the headers:
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1673612400
X-RateLimit-Retry-After: 3600
X-RateLimit-Limit (number)
Total number of requests allowed in the current window

X-RateLimit-Remaining (number)
Number of requests remaining in the current window

X-RateLimit-Reset (number)
Unix timestamp when the current window resets

X-RateLimit-Retry-After (number)
Seconds to wait before making another request (only present when rate limited)
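These headers can be collected into a single object for logging or throttling decisions. A minimal sketch; the helper name is ours, and the last header may be absent on successful responses:

```javascript
// Parse SuperDoc rate limit headers from a fetch Response-like object.
// Missing headers yield null rather than NaN.
function parseRateLimitHeaders(headers) {
  const num = (name) => {
    const value = headers.get(name);
    return value == null ? null : Number(value); // header values arrive as strings
  };
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    reset: num('X-RateLimit-Reset'),            // Unix timestamp
    retryAfter: num('X-RateLimit-Retry-After'), // seconds, only when rate limited
  };
}
```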

Rate Limit Windows

Rate limits are calculated using sliding windows:
Limit Type   Window Size   Reset Behavior
Hourly       60 minutes    Rolling window
Daily        24 hours      Resets at midnight UTC
The hourly sliding window provides fairer usage distribution than a fixed window; the daily limit resets on a fixed UTC calendar day.
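The sliding-window behavior can be mirrored on the client to pace requests before the server rejects them. A minimal sketch (illustrative only; the server enforces the real limits):

```javascript
// Client-side sliding window counter: remembers the timestamps of
// recent requests and drops those older than the window.
class SlidingWindowCounter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a request may be sent now, and records it.
  tryAcquire(now = Date.now()) {
    const cutoff = now - this.windowMs;
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because old timestamps age out continuously, a burst at the start of an hour does not free up all at once the way it would under a fixed window.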

Handling Rate Limits

429 Rate Limited Response

When you exceed your rate limit, the API returns:
{
  "code": "RATE_LIMIT_EXCEEDED",
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Try again in 3600 seconds.",
  "requestId": "req_abc123",
  "retryAfter": 3600
}
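A caller can surface this body as a typed error so retry logic has the `retryAfter` hint at hand. A sketch; the error class and helper are ours, with field names taken from the example body above:

```javascript
// Turn the 429 response body shown above into a retriable error.
class RateLimitError extends Error {
  constructor(body) {
    super(body.message);
    this.status = 429;
    this.retryAfter = body.retryAfter; // seconds
    this.requestId = body.requestId;
  }
}

function throwIfRateLimited(status, body) {
  if (status === 429 && body.code === 'RATE_LIMIT_EXCEEDED') {
    throw new RateLimitError(body);
  }
}
```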

Best Practices

When you receive a 429 response, retry with exponential backoff, preferring the server's retryAfter hint when it is available:
async function convertWithRetry(file, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await convert(file);
    } catch (error) {
      if (error.status === 429) {
        // Use the server's hint if present; otherwise back off exponentially
        const delay = error.retryAfter
          ? error.retryAfter * 1000
          : Math.pow(2, i) * 1000;
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}
Check remaining requests before making calls:
const response = await fetch('/v1/convert', options);
// Header values arrive as strings, so convert before comparing
const remaining = Number(response.headers.get('X-RateLimit-Remaining'));

if (remaining < 10) {
  console.warn('Approaching rate limit');
  // Implement queuing or delay logic
}
For large volumes, process files in batches:
async function batchConvert(files, batchSize = 10) {
  const results = [];
  
  for (let i = 0; i < files.length; i += batchSize) {
    const batch = files.slice(i, i + batchSize);
    const batchResults = await Promise.all(
      batch.map(file => convertWithRetry(file))
    );
    results.push(...batchResults);
    
    // Optional: Add delay between batches
    if (i + batchSize < files.length) {
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }
  
  return results;
}
For high-volume scenarios, consider async processing:
  • Submit conversion jobs to a queue
  • Receive results via webhooks
  • Avoid blocking API calls
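The submit-then-webhook flow above can be sketched as a pending-job registry: submission returns a promise immediately, and the webhook handler settles it later. The endpoint shapes and field names here are hypothetical:

```javascript
// Sketch of the job-queue pattern: submit work, return immediately,
// and settle results when a webhook callback arrives.
// (Payload shape is hypothetical, not the SuperDoc webhook contract.)
const pendingJobs = new Map(); // jobId -> { resolve, reject }

function submitJob(jobId) {
  return new Promise((resolve, reject) => {
    pendingJobs.set(jobId, { resolve, reject });
    // In a real integration, POST the file to the conversion API here,
    // passing a webhook URL, and key the map by the returned job id.
  });
}

// Called by your webhook endpoint when the API reports completion.
function handleWebhook(payload) {
  const job = pendingJobs.get(payload.jobId);
  if (!job) return false; // unknown or already-settled job
  pendingJobs.delete(payload.jobId);
  if (payload.status === 'completed') job.resolve(payload.result);
  else job.reject(new Error(payload.error || 'Conversion failed'));
  return true;
}
```

Because no request blocks waiting on a conversion, this pattern consumes far fewer API calls under load than polling for status.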

Rate Limit Strategies

Queue-Based Processing

class ConversionQueue {
  constructor(maxConcurrent = 5) {
    this.queue = [];
    this.running = 0;
    this.maxConcurrent = maxConcurrent;
  }

  async add(file) {
    return new Promise((resolve, reject) => {
      this.queue.push({ file, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.running >= this.maxConcurrent || this.queue.length === 0) {
      return;
    }

    this.running++;
    const { file, resolve, reject } = this.queue.shift();

    try {
      const result = await convertWithRetry(file);
      resolve(result);
    } catch (error) {
      reject(error);
    } finally {
      this.running--;
      this.process(); // Process next item
    }
  }
}
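The same throttling can be had without a class by running a fixed number of workers over a shared index. A compact, self-contained alternative sketch (the helper name is ours):

```javascript
// Run an async worker over items with at most maxConcurrent in flight.
// Safe without locks because JavaScript is single-threaded: each worker
// claims the next index synchronously before awaiting.
async function runWithConcurrency(items, worker, maxConcurrent = 5) {
  const results = new Array(items.length);
  let next = 0;

  async function runWorker() {
    while (next < items.length) {
      const index = next++; // claim the next item
      results[index] = await worker(items[index]);
    }
  }

  const workers = Array.from(
    { length: Math.min(maxConcurrent, items.length) },
    () => runWorker()
  );
  await Promise.all(workers);
  return results; // in the same order as items
}
```

Either approach keeps you under a concurrency ceiling; the class is better when jobs arrive over time, the helper when you already hold the full list.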

Distributed Rate Limiting

For applications with multiple servers:
// Using Redis for shared rate limit state (node-redis v4+)
const redis = require("redis");
const client = redis.createClient();
await client.connect();

const RATE_LIMIT = 1000; // requests per window

async function checkRateLimit(apiKey) {
  const key = `rate_limit:${apiKey}`;
  const count = await client.incr(key);

  // Start the window only when the key is first created, so repeated
  // calls don't keep pushing the expiry forward and trap busy clients.
  if (count === 1) {
    await client.expire(key, 3600); // 1 hour
  }

  if (count > RATE_LIMIT) {
    throw new Error("Rate limit exceeded");
  }
}

Upgrading Your Limits

When to Upgrade

Consider upgrading when you:
  • Consistently hit rate limits
  • Need to process large batches
  • Require higher file size limits
  • Want priority support

Plan Comparison

Feature           Free        Pro       Enterprise
Requests/hour     100         1,000     Custom
File size limit   5MB         25MB      100MB+
Support           Community   Email     Dedicated
SLA               None        99.9%     99.99%

Upgrade Your Plan

Increase your rate limits and unlock additional features

Monitoring and Alerts

Dashboard Metrics

Monitor your usage in the dashboard:
  • Real-time request counts
  • Rate limit utilization
  • Error rates
  • Response times

API Usage Alerts

Set up alerts for:
  • 80% rate limit utilization
  • Repeated 429 errors
  • Unusual traffic spikes
  • API key compromise indicators

Troubleshooting

Common rate limiting issues and solutions:

Unexpected Rate Limits

Problem: Getting 429 errors with low usage

Solutions:
  • Check for multiple API keys sharing limits
  • Verify your plan tier in the dashboard
  • Contact support if limits seem incorrect

Inconsistent Rate Limiting

Problem: Rate limits vary between requests

Causes:
  • Multiple servers with different clocks
  • Sliding window calculations
  • Burst vs sustained usage
Solutions:
  • Implement client-side rate limiting
  • Add jitter to request timing
  • Use queue-based processing
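The jitter suggestion above can be implemented as full jitter on top of exponential backoff, so many clients that hit the limit at the same moment don't retry in lockstep. A sketch (function name and defaults are ours):

```javascript
// Exponential backoff delay with full jitter: pick a random delay in
// [0, base * 2^attempt], capped at maxDelayMs.
function backoffWithJitter(attempt, baseMs = 1000, maxDelayMs = 30000) {
  const ceiling = Math.min(maxDelayMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling;
}
```

Full jitter trades a slightly longer average wait for much better spread of retries, which smooths the burst that sliding-window limiters would otherwise see.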

Contact Support

Need higher limits or custom rate limiting?

Enterprise Sales

Discuss custom rate limits and enterprise features

Technical Support

Get help with rate limiting issues